In-Context Retrieval-Augmented Language Models
https://arxiv.org/abs/2302.00083
#RAG
https://github.com/ai21labs/in-context-ralm
Figure 2
An example of In-Context RALM: we simply prepend the retrieved document before the input prefix
Generate the continuation of "World Cup 2022 was the last with 32 teams, before the increase to"
Add "FIFA World Cup 2026 will expand to 48 teams" to the context, and the model can correctly continue with "48"
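A minimal sketch of this mechanism, assuming the Hugging Face transformers API and gpt2 as a stand-in LM; the hard-coded document here replaces an actual retriever (the paper uses BM25/dense retrievers over a corpus):

```python
# In-Context RALM sketch: no architecture change, no fine-tuning;
# just prepend the retrieved document to the input prefix.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Stand-in for retrieval (assumption; a real system would query BM25 etc.)
retrieved_doc = "FIFA World Cup 2026 will expand to 48 teams."
prefix = "World Cup 2022 was the last with 32 teams, before the increase to"

# The whole method: document + prefix, fed to an off-the-shelf LM.
prompt = retrieved_doc + "\n" + prefix

inputs = tokenizer(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=5, do_sample=False)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:]))
```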
Figure 4
Perplexity: lower is better
RAG improves performance for models of every size (see the sketch below)
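A hedged sketch of how such a perplexity comparison could be run, again assuming transformers and gpt2. The strings and target are illustrative only; the paper's evaluation scores full corpora with stride-based retrieval, not a single sentence:

```python
# Compare perplexity of the same target text with and without a
# retrieved document prepended. Lower with the document = retrieval helped.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(context: str, target: str) -> float:
    """Perplexity of `target` tokens conditioned on `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    tgt_ids = tokenizer(target, return_tensors="pt").input_ids
    input_ids = torch.cat([ctx_ids, tgt_ids], dim=1)
    # Mask context positions so the loss covers only the target tokens.
    labels = input_ids.clone()
    labels[:, : ctx_ids.shape[1]] = -100
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss
    return torch.exp(loss).item()

prefix = "World Cup 2022 was the last with 32 teams, before the increase to"
doc = "FIFA World Cup 2026 will expand to 48 teams.\n"
target = " 48 teams in 2026."  # illustrative continuation (assumption)

print("no retrieval:  ", perplexity(prefix, target))
print("with retrieval:", perplexity(doc + prefix, target))
```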